Review for NeurIPS paper: Metric-Free Individual Fairness in Online Learning
The paper concerns a new online learning problem subject to the constraint of individual fairness. It provides a framework that reduces online classification in the considered model to standard online classification, obtaining an algorithm with sublinear regret both in terms of accuracy and fairness, as well as strong generalization bounds in the i.i.d. setting. All the reviewers liked the paper and the proposed metric-free approach. They appreciated the interesting problem formulation and the clean reduction to a known online learning problem. The paper received uniformly high scores of 8 from all reviewers. The reviewers found some issues with the presentation, and I hope the authors will address them in the final version of the manuscript.
Metric-Free Individual Fairness in Online Learning
We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals are treated similarly. Unlike prior work on individual fairness, we do not assume that the similarity measure among individuals is known, nor do we assume that such a measure takes a certain parametric form. Instead, in each round, an auditor examines the learner's decisions and attempts to identify a pair of individuals that are treated unfairly by the learner. We provide a general reduction framework that reduces online classification in our model to standard online classification, which allows us to leverage existing online learning algorithms to achieve sub-linear regret and a sub-linear number of fairness violations. Surprisingly, in the stochastic setting where the data are drawn independently from a distribution, we are also able to establish PAC-style fairness and accuracy generalization guarantees (Rothblum and Yona, 2018), despite only having access to a very restricted form of fairness feedback.
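To make the round-based interaction concrete, here is a toy Python sketch of the protocol described above. All names, the auditor's hidden metric, and the learner's update rule are our own illustrative assumptions, not the paper's reduction: in each round the learner posts a randomized policy, an auditor who privately knows the similarity measure flags at most one pair of individuals whose treatment gap exceeds their similarity by a slack `alpha`, and the learner responds by flattening its policy, so the total number of flagged violations stays bounded.

```python
import math
import random

def hidden_similarity(x, y):
    # The auditor's private similarity measure (an assumption for this toy):
    # plain distance between one-dimensional features.
    return abs(x - y)

def make_policy(slope, threshold):
    # Randomized policy: probability of the positive decision for feature x.
    return lambda x: 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

def auditor(policy, individuals, alpha=0.1):
    # Report one pair treated "too differently" relative to their similarity,
    # if any exists; otherwise stay silent. This is the restricted feedback:
    # the learner never sees the metric itself, only a violating pair.
    for i, x in enumerate(individuals):
        for y in individuals[i + 1:]:
            if abs(policy(x) - policy(y)) > hidden_similarity(x, y) + alpha:
                return (x, y)
    return None

def run(rounds=200):
    random.seed(0)
    slope, threshold = 50.0, 0.5  # learner starts with a near-hard threshold
    violations = 0
    for _ in range(rounds):
        individuals = [random.random() for _ in range(5)]
        policy = make_policy(slope, threshold)
        if auditor(policy, individuals) is not None:
            violations += 1
            slope *= 0.8  # flatten the policy: similar x now treated similarly
    return violations
```

Because a logistic policy with slope `s` changes by at most `s/4` per unit of feature distance, once the slope drops to 4 or below the policy is 1-Lipschitz and can never again be flagged, so the violation count here is bounded by a constant independent of the horizon. This is only a caricature of the paper's guarantee, where the reduction to standard online classification yields a sub-linear number of violations against an adaptive auditor.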